Federated learning (FL) has been proposed to facilitate the training of models in distributed environments. It supports the protection of (local) data privacy and uses local resources for model training. Thus far, most research has been devoted to "core issues", such as the adaptation of machine learning algorithms to FL, data privacy protection, or handling the effects of uneven data distributions between clients. This contribution is anchored in a practical use case in which FL is to be actually deployed within an Internet of Things ecosystem. Hence, somewhat different issues, beyond the popular considerations found in the literature, need to be taken into account. Moreover, an architecture for building flexible and adaptable FL solutions is introduced.
The availability of large amounts of annotated data is one of the pillars of the success of deep learning. Although many large datasets have been made available for research, in real life this is often not the case (e.g., companies cannot share data due to GDPR or concerns related to intellectual property protection). Federated learning (FL) is a potential solution to this problem, as it enables training on data scattered across multiple nodes without sharing the local data itself. However, even FL methods pose a threat to data privacy if not handled properly. Therefore, we propose StatMix, an augmentation method that uses image statistics to improve the results of FL scenarios. StatMix is empirically tested on CIFAR-10 and CIFAR-100, using two neural network architectures. In all FL experiments, the application of StatMix improves the average accuracy compared to the baseline training (without StatMix). Some improvement can also be observed in non-FL settings.
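The abstract does not spell out the mechanism, but going by its description (image statistics are shared instead of images), a minimal sketch of the idea might look as follows; the function names, the mixing probability `p`, and the normalize/de-normalize scheme are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def image_stats(img):
    """Per-channel mean and std of an HxWxC image; these low-dimensional
    statistics would be the only thing exchanged between FL nodes."""
    return img.mean(axis=(0, 1)), img.std(axis=(0, 1)) + 1e-6

def statmix(img, shared_stats, p=0.5, rng=np.random):
    """With probability p, re-style `img`: normalize it with its own channel
    statistics, then de-normalize with statistics sampled from the shared pool."""
    if rng.random() > p:
        return img
    mu, sigma = image_stats(img)
    mu_s, sigma_s = shared_stats[rng.randint(len(shared_stats))]
    return (img - mu) / sigma * sigma_s + mu_s
```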
One of the important problems in federated learning is how to deal with unbalanced data. This contribution introduces a novel technique designed to cope with label-skewed non-IID data using adversarial inputs created with the I-FGSM method. The adversarial inputs guide the training process and allow the weighted federated averaging to give more importance to clients with "selected" local label distributions. Experimental results, obtained for an image classification task on the MNIST and CIFAR-10 datasets, are reported and analyzed.
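The abstract names the two building blocks (I-FGSM-generated adversarial inputs and weighted federated averaging) without giving the exact weighting rule. Below is a sketch of those blocks; how client weights are derived from the adversarial inputs is deliberately left abstract, since that mapping is the paper's contribution:

```python
import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Iterative FGSM: repeated signed-gradient ascent steps on the input,
    clipped to an eps-ball around the original batch x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = torch.min(torch.max(x_adv.detach() + alpha * grad.sign(),
                                    x - eps), x + eps)
    return x_adv

def weighted_fedavg(client_states, weights):
    """Average client state dicts with per-client weights, e.g. weights
    derived from how each client's model behaves on the adversarial inputs."""
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    return {k: sum(wi * s[k].float() for wi, s in zip(w, client_states))
            for k in client_states[0]}
```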
Working with ontologies in practice remains very challenging, especially when multiple ontologies are involved. Moreover, despite recent progress, the realization of systematic ontology quality assurance remains a difficult problem. In this work, the quality of thirty biomedical ontologies and the Computer Science Ontology is studied from the perspective of a practical use case. Special scrutiny is given to cross-ontology references, which are essential for combining ontologies. Multiple methods for detecting potential problems are proposed, including natural language processing and network analysis. Moreover, several suggestions for improving ontologies and their quality assurance processes are presented. It is argued that, although automated tools for ontology quality assurance are essential for ontology improvement, they do not fully solve the problem. Ontology reuse is the ultimate way of continuously verifying and improving ontology quality, as well as guiding its future development. Specifically, multiple issues and fixes can be found through practical and diverse ontology reuse scenarios.
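As one concrete illustration of the network-analysis angle, a toy check for dangling cross-ontology references might look like this; the data model (`{term_id: [referenced_ids]}`) and the check itself are assumptions for illustration, not the paper's actual pipeline:

```python
import networkx as nx

def check_cross_references(ontologies):
    """Build a reference graph and flag cross-ontology references whose
    target term is not defined in any of the loaded ontologies."""
    known = {t for terms in ontologies.values() for t in terms}
    graph = nx.DiGraph()
    dangling = []
    for onto, terms in ontologies.items():
        for term, refs in terms.items():
            for ref in refs:
                graph.add_edge(term, ref)
                if ref not in known:
                    dangling.append((onto, term, ref))
    return graph, dangling
```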
The enormous volume of scientific publications presents an ever-growing challenge: finding those relevant to a given research question and making informed decisions based on them. This becomes extremely difficult without the use of automated tools. Here, one possible area of improvement is the automatic classification of publication abstracts according to their topic. This work introduces a novel knowledge-based classifier of publications. The approach focuses on achieving scalability and easy adaptability to other domains. In the very demanding domain of food safety, both classification speed and accuracy proved satisfactory. Further development and evaluation of the method are needed, as the proposed approach shows great potential.
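The abstract gives no algorithmic detail beyond "knowledge-based", so the following is only a toy sketch of the general idea: topics are scored by how many of their knowledge-base terms occur in the abstract, and adapting to a new domain amounts to swapping the term dictionary. All names here are hypothetical:

```python
def classify_abstract(abstract, topic_terms):
    """Assign the topic whose knowledge-base terms best match the abstract;
    `topic_terms` maps a topic label to terms from a curated knowledge base."""
    text = abstract.lower()
    scores = {topic: sum(term.lower() in text for term in terms)
              for topic, terms in topic_terms.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```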
Many challenging reinforcement learning (RL) problems require designing a distribution of tasks that can be applied to train effective policies. This distribution of tasks can be specified by a curriculum, which is meant to improve the results of learning and accelerate it. We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning in which a task sequence is created based on the success rate of each task. In this setting, each task is an algorithmically created environment instance with a unique configuration. The algorithm selects the order of tasks that provides the fastest learning for agents. The probability of selecting any task for the next stage of learning is determined by evaluating its performance score in previous stages. Experiments were carried out on the Partially Observable Grid Environment for Multiple Agents (POGEMA) and the Procgen benchmark. We demonstrate that SITP matches or surpasses the results of other curriculum design methods. Our method can be implemented with a handful of minor modifications to any standard RL framework and provides useful prioritization with minimal computational overhead.
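A minimal sketch of the selection step, assuming the recent change in a task's success rate serves as its performance score (the abstract only says the score is evaluated over previous stages, so SITP's exact scoring function may differ):

```python
import numpy as np

def sitp_probabilities(success_histories, tau=0.1):
    """Turn per-task success-rate histories into sampling probabilities:
    score each task by its recent change in success rate (a learning-progress
    proxy) and softmax-normalize the scores."""
    scores = np.array([abs(h[-1] - h[-2]) if len(h) > 1 else 1.0
                       for h in success_histories])
    logits = scores / tau
    p = np.exp(logits - logits.max())
    return p / p.sum()

def sample_task(success_histories, rng=np.random):
    """Pick the task for the next stage of learning."""
    return rng.choice(len(success_histories), p=sitp_probabilities(success_histories))
```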
This paper presents a solution to the GenChal 2022 shared task dedicated to feedback comment generation for writing learning. In this task, given a text with an error and the span of the error, a system generates an explanatory note that helps the writer (a language learner) improve their writing skills. Our solution is based on fine-tuning the T5 model on the initial dataset, augmented according to the syntactic dependencies of the words located within the indicated error span. The solution of our team, "nigula", obtained second place according to manual evaluation by the organizers.
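A minimal sketch of such a fine-tuning setup, assuming the error span is marked with inline tokens; the `<err>` markers and the example sentence are hypothetical, and the dependency-based augmentation step is not shown:

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def training_pair(text, start, end, comment):
    """Mark the error span in the input text and pair it with the target
    feedback comment for seq2seq fine-tuning."""
    marked = text[:start] + "<err> " + text[start:end] + " </err>" + text[end:]
    batch = tokenizer(marked, return_tensors="pt", truncation=True)
    batch["labels"] = tokenizer(comment, return_tensors="pt",
                                truncation=True).input_ids
    return batch

batch = training_pair("He agree with me.", 3, 8,
                      "Use the third-person singular form of the verb.")
loss = model(**batch).loss  # standard T5 fine-tuning objective
loss.backward()
```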
The task of reconstructing 3D human motion has wide-ranging applications. The gold-standard motion capture (MoCap) systems are accurate but inaccessible to the general public due to their cost, hardware, and space constraints. In contrast, monocular human mesh recovery (HMR) methods are much more accessible than MoCap, as they take single-view videos as inputs. Replacing multi-view MoCap systems with a monocular HMR method would break the current barriers to collecting accurate 3D motion, making exciting applications like motion analysis and motion-driven animation accessible to the general public. However, the performance of existing HMR methods degrades when the video contains challenging and dynamic motion that is not present in the existing MoCap datasets used for training. This reduces their appeal, as dynamic motion is frequently the target of 3D motion recovery in the aforementioned applications. Our study aims to bridge the gap between monocular HMR and multi-view MoCap systems by leveraging information shared across multiple video instances of the same action. We introduce the Neural Motion (NeMo) field, which is optimized to represent the underlying 3D motion across a set of videos of the same action. Empirically, we show that NeMo can recover 3D motion in sports using videos from the Penn Action dataset, where NeMo outperforms existing HMR methods in terms of 2D keypoint detection. To further validate NeMo using 3D metrics, we collected a small MoCap dataset mimicking actions in Penn Action, and show that NeMo achieves better 3D reconstruction compared to various baselines.
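One way to read "a field optimized to represent the underlying 3D motion across a set of videos" is an MLP conditioned on time and a per-video latent code; the architecture below is an assumption for illustration, not the paper's exact design. In this setting, the field would be fitted by projecting the predicted joints into each video and minimizing 2D keypoint reprojection error:

```python
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """Map normalized time t and a per-video latent code to a 3D pose:
    the shared action structure lives in the MLP weights, while per-video
    variation lives in the learned codes."""
    def __init__(self, n_videos, n_joints=17, z_dim=32, hidden=256):
        super().__init__()
        self.codes = nn.Embedding(n_videos, z_dim)  # one code per video instance
        self.mlp = nn.Sequential(
            nn.Linear(1 + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_joints * 3),
        )
        self.n_joints = n_joints

    def forward(self, t, video_idx):
        z = self.codes(video_idx)
        x = torch.cat([t.unsqueeze(-1), z], dim=-1)
        return self.mlp(x).view(-1, self.n_joints, 3)
```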
Model calibration, which is concerned with how frequently the model predicts correctly, not only plays a vital part in statistical model design but also has substantial practical applications, such as optimal decision-making in the real world. However, it has been discovered that modern deep neural networks are generally poorly calibrated due to overestimation (or underestimation) of predictive confidence, which is closely related to overfitting. In this paper, we propose Annealing Double-Head, a simple-to-implement but highly effective architecture for calibrating a DNN during training. To be precise, we construct an additional calibration head (a shallow neural network that typically has one latent layer) on top of the last latent layer of the normal model to map the logits to aligned confidences. Furthermore, a simple annealing technique that dynamically scales the logits produced by the calibration head during training is developed to improve its performance. Under both in-distribution and distributional-shift circumstances, we exhaustively evaluate our Annealing Double-Head architecture on multiple pairs of contemporary DNN architectures and vision and speech datasets. We demonstrate that our method achieves state-of-the-art model calibration performance without post-processing, while providing predictive accuracy comparable to other recently proposed calibration methods on a range of learning tasks.
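A sketch of the architecture as described in the abstract: a shallow calibration head with one latent layer sits on top of the backbone's last latent layer, and its logits are scaled by an annealed temperature during training. Layer sizes and the schedule are illustrative; the paper's loss and exact annealing rule are not reproduced here:

```python
import torch
import torch.nn as nn

class AnnealingDoubleHead(nn.Module):
    """A normal classification head plus a shallow calibration head that maps
    the same last-layer features to calibrated confidences."""
    def __init__(self, backbone, feat_dim, n_classes, calib_hidden=64):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.calib_head = nn.Sequential(        # one latent layer, per the abstract
            nn.Linear(feat_dim, calib_hidden), nn.ReLU(),
            nn.Linear(calib_hidden, n_classes),
        )

    def forward(self, x, temperature=1.0):
        h = self.backbone(x)
        logits = self.cls_head(h)                        # used for prediction
        calib_logits = self.calib_head(h) / temperature  # annealed scaling
        return logits, calib_logits
```

During training, `temperature` would follow an annealing schedule (e.g., decaying toward 1 over the course of training) so that the calibration head gradually sharpens its confidence estimates.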
Dense prediction tasks such as segmentation and detection of pathological entities hold crucial clinical value in the digital pathology workflow. However, obtaining dense annotations on large cohorts is usually tedious and expensive. Contrastive learning (CL) is thus often employed to leverage large volumes of unlabeled data to pre-train the backbone network. To boost CL for dense prediction, some studies have proposed variations of dense matching objectives in pre-training. However, our analysis shows that employing existing dense matching strategies on histopathology images enforces invariance among incorrect pairs of dense features and is thus imprecise. To address this, we propose a precise location-based matching mechanism that utilizes the overlap between geometric transformations to precisely match regions in two augmentations. Extensive experiments on two pre-training datasets (TCGA-BRCA, NCT-CRC-HE) and three downstream datasets (GlaS, CRAG, BCSS) highlight the superiority of our method in semantic and instance segmentation tasks. Our method outperforms previous dense matching methods by up to 7.2% in average precision for detection and 5.6% in average precision for instance segmentation. Additionally, by using our matching mechanism in three popular contrastive learning frameworks (MoCo-v2, VICRegL, and ConCL), the average precision in detection is improved by 0.7% to 5.2% and the average precision in segmentation by 0.7% to 4.0%, demonstrating its generalizability.
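A minimal sketch of overlap-based location matching, assuming each augmentation is a crop whose box in the original image is known: feature-grid cells are paired by the absolute image coordinates of their centers. Grid size and the distance threshold are illustrative, not the paper's exact mechanism:

```python
import torch

def match_grid_cells(box_a, box_b, grid=7):
    """Pair feature-map cells of two augmented crops via image coordinates.
    box = (x0, y0, x1, y1) of a crop in the original image; returns indices
    (i, j) of cells whose centers land on (nearly) the same image location."""
    def centers(box):
        x0, y0, x1, y1 = box
        xs = x0 + (torch.arange(grid) + 0.5) / grid * (x1 - x0)
        ys = y0 + (torch.arange(grid) + 0.5) / grid * (y1 - y0)
        return torch.stack(torch.meshgrid(ys, xs, indexing="ij"),
                           dim=-1).reshape(-1, 2)

    ca, cb = centers(box_a), centers(box_b)
    dist = torch.cdist(ca, cb)           # pairwise distances between centers
    nearest = dist.argmin(dim=1)
    cell = min(box_a[2] - box_a[0], box_b[2] - box_b[0]) / grid
    keep = dist[torch.arange(len(ca)), nearest] < cell / 2
    return torch.nonzero(keep).squeeze(1), nearest[keep]
```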